519 research outputs found

    Convergence analysis of an Inexact Infeasible Interior Point method for Semidefinite Programming

    In this paper we present an extension to SDP of the well-known infeasible interior point method for linear programming of Kojima, Megiddo and Mizuno (A primal-dual infeasible-interior-point algorithm for linear programming, Math. Progr., 1993). The extension developed here allows the use of inexact search directions; i.e., the linear systems defining the search directions can be solved with an accuracy that increases as the solution is approached. A convergence analysis is carried out and the global convergence of the method is proved.
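
    To make the inexactness idea concrete, here is a minimal sketch (not the paper's SDP algorithm) of one infeasible primal-dual interior point step in the simpler LP setting, where the Newton system is solved by GMRES with a relative tolerance that shrinks with the duality measure mu; the tolerance rule min(0.1, mu) and the helper names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import bmat, csc_matrix, diags, eye
from scipy.sparse.linalg import gmres

def inexact_ipm_step(A, b, c, x, y, s, sigma=0.1):
    """One infeasible primal-dual IPM step for min c@x s.t. Ax=b, x>=0,
    with an *inexact* Newton solve (illustrative sketch)."""
    m, n = A.shape
    A = csc_matrix(A)
    mu = x @ s / n                      # duality measure
    rp = A @ x - b                      # primal infeasibility
    rd = A.T @ y + s - c                # dual infeasibility
    rc = x * s - sigma * mu             # perturbed complementarity
    # Newton system in (dx, dy, ds).
    K = bmat([[A,        None, None    ],
              [None,     A.T,  eye(n)  ],
              [diags(s), None, diags(x)]], format='csc')
    rhs = -np.concatenate([rp, rd, rc])
    # Inexactness rule: accuracy increases as the solution is approached.
    d, _ = gmres(K, rhs, rtol=min(1e-1, mu), atol=0.0)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Fraction-to-the-boundary rule keeps (x, s) strictly positive.
    alpha = min(1.0, 0.99 * _step_max(x, dx), 0.99 * _step_max(s, ds))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds

def _step_max(v, dv):
    neg = dv < 0
    return float(np.min(-v[neg] / dv[neg])) if neg.any() else np.inf
```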

    Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization

    A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is $\beta$-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality order and arbitrary degree of available approximate derivatives. It extends results of Cartis, Gould and Toint (2018) on the evaluation complexity to the inexact case: if a $q$th-order minimizer is sought using approximations to the first $p$ derivatives, it is proved that a suitable approximate minimizer within $\epsilon$ is computed by the proposed algorithm in at most $O(\epsilon^{-\frac{p+\beta}{p-q+\beta}})$ iterations and at most $O(|\log(\epsilon)|\,\epsilon^{-\frac{p+\beta}{p-q+\beta}})$ approximate evaluations. An algorithmic variant, although more rigid in practice, can be proved to find such an approximate minimizer in $O(|\log(\epsilon)| + \epsilon^{-\frac{p+\beta}{p-q+\beta}})$ evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning. Comment: 32 pages.
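
    The adaptive-inexactness mechanism can be illustrated, in a heavily simplified form, for the second-order unconstrained case ($p = 2$, $q = 1$): below, the oracle f_approx(x, eps_f) is assumed to return a function value with error at most eps_f, and the accuracy demanded of it is a fraction omega of the decrease predicted by the cubic model. The constants and the crude inner model solve are illustrative, not the paper's.

```python
import numpy as np

def ar2_inexact(f_approx, grad, hess, x, eps=1e-5, sigma=1.0,
                omega=0.1, eta=0.1, max_iter=200):
    """Adaptive cubic regularization with inexact function values."""
    n = len(x)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:
            return x
        # Approximately minimize the cubic model
        #   m(s) = g.s + 0.5 s.Hs + (sigma/3)||s||^3
        # by a few Newton steps on grad m(s) = 0 (crude but sufficient here).
        s = -np.linalg.solve(H + sigma * np.eye(n), g)
        for _ in range(5):
            ns = max(np.linalg.norm(s), 1e-12)
            gm = g + H @ s + sigma * ns * s
            Jm = H + sigma * (ns * np.eye(n) + np.outer(s, s) / ns)
            s = s - np.linalg.solve(Jm, gm)
        ns = np.linalg.norm(s)
        pred = max(-(g @ s + 0.5 * s @ (H @ s)) - sigma / 3 * ns**3, 1e-16)
        # Adaptive inexactness: allow a function-value error that is only
        # a fraction omega of the model's predicted decrease.
        eps_f = omega * pred
        rho = (f_approx(x, eps_f) - f_approx(x + s, eps_f)) / pred
        if rho >= eta:                       # successful: accept the step
            x, sigma = x + s, max(0.5 * sigma, 1e-8)
        else:                                # unsuccessful: regularize more
            sigma *= 2.0
    return x
```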

    Adaptive Regularization for Nonconvex Optimization Using Inexact Function Values and Randomly Perturbed Derivatives

    A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous $p$-th derivative and given an arbitrary optimality order $q \leq p$, it is shown that this algorithm will, in expectation, compute such a point in at most $O\left(\left(\min_{j\in\{1,\ldots,q\}}\epsilon_j\right)^{-\frac{p+1}{p-q+1}}\right)$ inexact evaluations of $f$ and its derivatives whenever $q\in\{1,2\}$, where $\epsilon_j$ is the tolerance for $j$th-order accuracy. This bound becomes at most $O\left(\left(\min_{j\in\{1,\ldots,q\}}\epsilon_j\right)^{-\frac{q(p+1)}{p}}\right)$ inexact evaluations if $q>2$ and all derivatives are Lipschitz continuous. Moreover, these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined. Comment: 22 pages.
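
    One common source of such randomly perturbed derivatives is subsampling in finite-sum problems. The sketch below is purely illustrative (the oracle grad_i and the Chebyshev-style sample-size heuristic are assumptions, not the paper's rule): the batch size grows as the requested accuracy eps_g shrinks, so the returned gradient is a random perturbation of the true one with controllable expected error.

```python
import numpy as np

def subsampled_grad(grad_i, N, x, eps_g, rng=None):
    """Randomly perturbed gradient of f(x) = mean_i f_i(x): average
    grad_i(i, x) over a batch whose size is driven by eps_g."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Heuristic: sampling variance ~ 1/batch, so batch ~ eps_g^{-2},
    # capped at the full sample size N.
    batch = min(N, max(1, int(np.ceil(eps_g ** -2))))
    idx = rng.choice(N, size=batch, replace=False)
    return np.mean([grad_i(i, x) for i in idx], axis=0)
```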

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and a good preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems. Comment: 22 pages.
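
    The flavor of such a low-rank Schur-complement update can be sketched as follows, under simplifying assumptions (diagonal (1,1) block, an index-selection rule based on the largest diagonal changes, Woodbury applied to a direct seed factorization); this is an illustration of the idea, not the paper's procedure.

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import splu

def updated_schur_solver(A, d0, d, k):
    """Approximate solver for S = A diag(d)^{-1} A^T built from the seed
    S0 = A diag(d0)^{-1} A^T by a rank-k Woodbury correction."""
    A = csc_matrix(A)
    S0 = (A @ diags(1.0 / d0) @ A.T).tocsc()
    lu = splu(S0)              # in practice the seed factorization is reused
    delta = 1.0 / d - 1.0 / d0
    idx = np.argsort(-np.abs(delta))[:k]   # columns whose weight changed most
    U = A[:, idx].toarray()                # m x k
    C = np.diag(delta[idx])                # k x k diagonal correction
    # Woodbury: (S0 + U C U^T)^{-1} = S0^{-1}
    #           - S0^{-1} U (C^{-1} + U^T S0^{-1} U)^{-1} U^T S0^{-1}.
    Z = lu.solve(U)                        # S0^{-1} U, solved column-wise
    M = np.linalg.inv(C) + U.T @ Z         # small k x k system
    def apply(r):
        y = lu.solve(r)
        return y - Z @ np.linalg.solve(M, U.T @ y)
    return apply
```

    The returned apply(r) would be used as the preconditioner action inside a Krylov solver for the new KKT system, at the cost of one seed solve plus a k x k solve per application.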

    On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

    Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust region and the bound constraints while seeking an approximate minimizer of the model. Focusing on bound-constrained systems of nonlinear equations, an inexact affine scaling method for large-scale problems, employing the inexact dogleg procedure, is described. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. Convergence analysis is performed without specifying the scaling matrix used to handle the bounds, and a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
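
    A single trial step of this kind might look as follows. This is an illustrative sketch only: it uses one Coleman-Li-style choice from the general class of scaling matrices the paper allows, and it computes the Newton step by a direct solve where the paper would permit an inexact (Krylov) solve.

```python
import numpy as np

def dogleg_step(F, J, x, l, u, delta):
    """One affine-scaling dogleg trial step for F(x) = 0 with l <= x <= u."""
    Fx, Jx = F(x), J(x)
    g = Jx.T @ Fx                          # gradient of 0.5 ||F(x)||^2
    # Coleman-Li-style scaling: distance to the bound g pushes toward.
    D = np.sqrt(np.where(g < 0, u - x, x - l))
    d = -(D * D) * g                       # scaled steepest-descent direction
    # Cauchy point along d, capped by the trust region.
    Jd = Jx @ d
    t = min((g @ (D * D * g)) / max(Jd @ Jd, 1e-16),
            delta / max(np.linalg.norm(d), 1e-16))
    p_c = t * d
    # Newton step; in the paper's setting this solve may be inexact.
    p_n = np.linalg.solve(Jx, -Fx)
    if np.linalg.norm(p_n) <= delta:
        p = p_n
    else:                                  # dogleg: walk from p_c toward p_n
        tau = _to_boundary(p_c, p_n - p_c, delta)
        p = p_c + tau * (p_n - p_c)
    # Pull the step back so the trial point stays strictly interior.
    alpha = min(1.0, 0.995 * _max_feasible(x, p, l, u))
    return x + alpha * p

def _to_boundary(p, q, delta):
    """Largest tau with ||p + tau q|| = delta (quadratic formula)."""
    a, b, c = q @ q, 2 * (p @ q), p @ p - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def _max_feasible(x, p, l, u):
    """Largest alpha with l <= x + alpha p <= u."""
    safe = np.where(p != 0, p, 1.0)
    hi = np.where(p > 0, (u - x) / safe, np.inf)
    lo = np.where(p < 0, (l - x) / safe, np.inf)
    return float(min(np.min(hi), np.min(lo)))
```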

    A matrix-free preconditioner for sparse symmetric positive definite systems and least-squares problems


    Partially Updated Switching-Method for systems of nonlinear equations

    A hybrid method for solving systems of $n$ nonlinear equations is given. The method does not use derivative information and is especially attractive when good starting points are not available and the given system is expensive to evaluate. It is shown that, after a few steps, each iteration requires $(2k + 1)$ function evaluations, where $k$, $1 \leq k \leq n$, is chosen so as to have an efficient algorithm. Global convergence results are given and superlinear convergence is established. Numerical results illustrate the performance of the proposed method.
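
    The (2k + 1)-evaluation budget can be illustrated by a derivative-free Newton-like iteration that refreshes only k columns of a finite-difference Jacobian per step: two evaluations per refreshed column plus one at the current point. The simple column-cycling rule below is an assumption made for the sketch; the paper's switching logic is more elaborate.

```python
import numpy as np

def partially_updated_newton(F, x, k, h=1e-6, tol=1e-8, max_iter=500):
    """Derivative-free iteration for F(x) = 0 costing 2k + 1 F-evaluations
    per step: k central-difference column refreshes plus F(x) itself."""
    n = len(x)
    B = np.eye(n)                  # initial Jacobian approximation
    col = 0                        # next column to refresh
    for _ in range(max_iter):
        Fx = F(x)                  # 1 evaluation
        if np.linalg.norm(Fx) <= tol:
            return x
        for _ in range(k):         # 2k evaluations
            e = np.zeros(n)
            e[col] = h
            B[:, col] = (F(x + e) - F(x - e)) / (2 * h)
            col = (col + 1) % n    # cycle through the columns
        x = x - np.linalg.solve(B, Fx)
    return x
```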

    A Relaxed Interior Point Method for Low-Rank Semidefinite Programming Problems with Applications to Matrix Completion

    A new relaxed variant of the interior point method for low-rank semidefinite programming problems is proposed in this paper. The method is a step outside the usual interior point framework. In anticipation of converging to a low-rank primal solution, a special nearly low-rank form of all primal iterates is imposed. To accommodate such a (restrictive) structure, the first-order optimality conditions have to be relaxed and are therefore approximated by solving an auxiliary least-squares problem. The relaxed interior point framework opens numerous possibilities for how primal and dual approximate Newton directions can be computed. In particular, it admits the application of both first- and second-order methods in this context. The convergence of the method is established. A prototype implementation is discussed and encouraging preliminary computational results are reported for solving the SDP reformulation of matrix completion problems.
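
    The two key ingredients, imposing a low-rank form on the iterates and replacing exact optimality conditions with least-squares subproblems, also appear in the standard alternating-least-squares approach to matrix completion sketched below. It is included only to illustrate that flavor; it is not the relaxed interior point method itself.

```python
import numpy as np

def als_completion(M, mask, r, lam=1e-2, iters=50, rng=None):
    """Fit X ~ U V^T of rank r to the observed entries of M (mask is a
    boolean array marking observations), by alternating least squares."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):          # regularized LS fit of each row of U
            j = np.flatnonzero(mask[i])
            A = V[j]
            U[i] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M[i, j])
        for j in range(n):          # regularized LS fit of each row of V
            i = np.flatnonzero(mask[:, j])
            A = U[i]
            V[j] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M[i, j])
    return U @ V.T
```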

    An optimally fast objective-function-free minimization algorithm using random subspaces

    An algorithm for unconstrained non-convex optimization is described, which does not evaluate the objective function and in which minimization is carried out, at each iteration, within a randomly selected subspace. It is shown that this random approximation technique does not affect the method's convergence nor its evaluation complexity for the search of an $\epsilon$-approximate first-order critical point, which is $\mathcal{O}(\epsilon^{-(p+1)/p})$, where $p$ is the order of derivatives used. A variant of the algorithm using approximate Hessian matrices is also analyzed and shown to require at most $\mathcal{O}(\epsilon^{-2})$ evaluations. Preliminary numerical tests show that the random-subspace technique can significantly improve performance on some problems, albeit, unsurprisingly, not for all. Comment: 23 pages.
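
    A minimal sketch of the objective-function-free, random-subspace idea for the first-order case is given below; the Gaussian sketching matrix and the AdaGrad-style step-size accumulator are assumptions chosen for the illustration, not the paper's specific rules. No objective values are ever computed.

```python
import numpy as np

def random_subspace_offo(grad, x, s, iters=1000, rng=None):
    """Objective-function-free first-order steps, each taken inside a
    randomly drawn s-dimensional subspace of R^n."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(x)
    acc = 1e-8                                       # gradient-energy accumulator
    for _ in range(iters):
        S = rng.standard_normal((n, s)) / np.sqrt(s) # random subspace basis
        gs = S.T @ grad(x)                           # gradient seen in the subspace
        acc += gs @ gs
        x = x - (1.0 / np.sqrt(acc)) * (S @ gs)      # map the step back to R^n
    return x
```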
    • ā€¦
    corecore